35 research outputs found

    The Application of Semantic Web Technologies to Content Analysis in Sociology

    In sociology, texts are understood as social phenomena that can serve as a means of analyzing social reality. Over the years, a broad range of techniques has evolved in sociological text analysis, including quantitative and qualitative methods as well as fully manual and computer-assisted approaches. The development of the World Wide Web and social media, along with technical advances such as automatic text and speech recognition, has contributed to an enormous increase in the amount of text available for analysis. In recent years, this has led sociologists to rely increasingly on computer-assisted approaches to text analysis, such as statistical Natural Language Processing (NLP) techniques. Yet although versatile methods and technologies have been developed for sociological text analysis, uniform standards for the analysis and publication of textual data are lacking. This also impairs the transparency of analysis processes and the reusability of research data. The Semantic Web and, with it, Linked Data offer a set of standards for representing and organizing information and knowledge. These standards are used by numerous applications, among them methods for publishing data and Named Entity Linking, a special form of NLP. This thesis discusses to what extent these standards and tools from the Semantic Web and Linked Data community can support computer-assisted text analysis in sociology. The required technologies are briefly introduced and then applied to an example dataset consisting of constitutional texts of the Netherlands from 1883 to 2016. It is demonstrated how RDF data can be generated from the documents, how they can be published, and how they can be accessed. Queries are formulated that initially refer exclusively to the local data; it is then demonstrated how this local knowledge can be enriched with information from external knowledge bases. The presented approaches are discussed in detail, and points of contact for a possible engagement of sociologists in the Semantic Web field are identified that could extend the presented analyses and query capabilities in the future.
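The RDF-generation step described in the abstract can be sketched in a few lines. The namespace, document identifiers, and property choices below are illustrative assumptions, not the thesis's actual vocabulary:

```python
# Minimal sketch: serializing document metadata as RDF N-Triples.
# The http://example.org/ namespace is a placeholder assumption.

def to_ntriples(doc_id: str, title: str, year: int) -> list[str]:
    """Serialize one document's metadata as N-Triples statements."""
    subject = f"<http://example.org/constitution/{doc_id}>"
    return [
        f'{subject} <http://purl.org/dc/terms/title> "{title}" .',
        f'{subject} <http://purl.org/dc/terms/date> "{year}" .',
    ]

triples = to_ntriples("nl1983", "Constitution of the Netherlands", 1983)
for t in triples:
    print(t)
```

Data published this way can then be loaded into any SPARQL-capable triple store and queried locally, or joined with external knowledge bases via federated queries.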

    Smart Media Navigator: Visualizing recommendations based on linked data

    The growing content in online media libraries makes it increasingly challenging for editors to curate their information and make it visible to their users. As a result, users are confronted with a vast amount of content and are often unable to find the information they are actually interested in. The Smart Media Navigator (SMN) aims to analyze a platform's entire content, enrich it with Linked Data content, and present it to the user in an innovative user interface with the help of Semantic Web technologies. The SMN enables the user to dynamically explore and navigate media library content.
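One plausible way such Linked Data enrichment can drive recommendations is entity overlap between items; a minimal sketch, where item names and entity URIs are toy assumptions rather than SMN's actual data:

```python
# Sketch of entity-based recommendation: items annotated with Linked Data
# entities are ranked by Jaccard overlap with a target item.

def jaccard(a: set, b: set) -> float:
    """Overlap of two entity sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target: str, annotations: dict, k: int = 2) -> list:
    """Rank other items by entity overlap with the target item."""
    scores = {item: jaccard(annotations[target], ents)
              for item, ents in annotations.items() if item != target}
    return sorted(scores, key=scores.get, reverse=True)[:k]

annotations = {
    "doc1": {"dbr:Nuremberg", "dbr:Albrecht_Dürer"},
    "doc2": {"dbr:Nuremberg", "dbr:Hans_Sachs"},
    "doc3": {"dbr:Amsterdam"},
}
print(recommend("doc1", annotations))  # doc2 shares an entity, doc3 does not
```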

    TRANSRAZ Data Model: Towards a Geosocial Representation of Historical Cities

    Preserving historical city architectures and making them (publicly) available has emerged as an important field within the cultural heritage and digital humanities research domains. In this context, the TRANSRAZ project is creating an interactive 3D environment of the historical city of Nuremberg spanning different periods of time. Next to the exploration of the city's historical architecture, TRANSRAZ also integrates information about its inhabitants, organizations, and important events, which are extracted semi-automatically from historical documents. Knowledge Graphs have proven useful and valuable for integrating and enriching these heterogeneous data. However, this task also comes with versatile data modeling challenges. The TRANSRAZ data model integrates agents, architectural objects, events, and historical documents into the 3D research environment by means of ontologies. The goal is to explore Nuremberg's multifaceted past across different time layers in the context of its architectural, social, economic, and cultural developments.
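The relations between agents, architectural objects, and events that such a model must capture can be sketched as simple typed records; the class and field names below are illustrative assumptions, not the actual TRANSRAZ ontology:

```python
# Toy sketch of a geosocial data model linking people, buildings, and events.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str

@dataclass
class ArchitecturalObject:
    label: str
    inhabitants: list = field(default_factory=list)

@dataclass
class Event:
    label: str
    year: int
    location: ArchitecturalObject

# An event is anchored both in time (year) and in space (a building),
# and the building in turn links to the agents who lived there.
house = ArchitecturalObject("Dürer House", [Agent("Albrecht Dürer")])
event = Event("Workshop founded", 1509, house)
```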

    Semantic Annotation and Information Visualization for Blogposts with refer

    The growing amount of documents in archives and blogs presents an increasing challenge for curators and authors who must tag, present, and recommend their content to users. refer comprises a set of powerful tools focusing on Named Entity Linking (NEL) which help authors and curators to semi-automatically analyze a platform's textual content and semantically annotate it based on Linked Open Data. In refer, automated NEL is complemented by manual semantic annotation supported by sophisticated autosuggestion of candidate entities, implemented as a publicly available WordPress plugin. In addition, refer visualizes the semantically enriched documents in a novel navigation interface for improved exploration of the entire content across the platform. The efficiency of the presented approach is supported by a qualitative evaluation of the user interfaces.
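The autosuggestion of candidate entities can be sketched at its simplest as prefix matching against an entity index; the index below is a toy stand-in, not refer's actual implementation:

```python
# Sketch of candidate-entity autosuggestion for manual annotation:
# case-insensitive prefix matching over (label, URI) pairs.

def suggest(prefix: str, index: dict) -> list:
    """Return (label, URI) pairs whose label starts with the typed prefix."""
    p = prefix.lower()
    return sorted((label, uri) for label, uri in index.items()
                  if label.lower().startswith(p))

index = {
    "Nuremberg": "dbr:Nuremberg",
    "Nuremberg Castle": "dbr:Nuremberg_Castle",
    "Netherlands": "dbr:Netherlands",
}
print(suggest("nur", index))  # both Nuremberg entries match
```

A production system would rank candidates by popularity or context rather than alphabetically, but the interaction pattern is the same.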

    Towards a representation of temporal data in archival records: Use cases and requirements

    Archival records are essential sources of information for historians and digital humanists to understand history. For modern information systems, they are often analysed and integrated into Knowledge Graphs for better access, interoperability, and re-use. However, due to restrictions in the representation of RDF predicates, temporal data within archival records is challenging to model. This position paper derives requirements for modeling temporal data in archival records from ongoing research projects in which archival records are analysed and integrated into Knowledge Graphs for research and exploration.
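The predicate restriction referred to here is that a plain RDF triple cannot carry a validity period. One common workaround is statement reification; a minimal sketch, with placeholder namespaces rather than the paper's proposed model:

```python
# Sketch of RDF reification: the statement itself becomes a resource,
# so a validity date can be attached to it as an ordinary property.

def reify(s: str, p: str, o: str, date: str, stmt_id: str) -> list[str]:
    """Emit a reified statement with an attached validity date."""
    st = f"<http://example.org/statement/{stmt_id}>"
    RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    return [
        f"{st} <{RDF}type> <{RDF}Statement> .",
        f"{st} <{RDF}subject> {s} .",
        f"{st} <{RDF}predicate> {p} .",
        f"{st} <{RDF}object> {o} .",
        f'{st} <http://example.org/validDuring> "1509" .'.replace("1509", date),
    ]

triples = reify("<ex:Dürer>", "<ex:livedIn>", "<ex:Nuremberg>", "1509", "s1")
```

RDF-star and named graphs are the other common options for such statement-level qualification, each with trade-offs in tooling support.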

    A Proposal for a Two-Way Journey on Validating Locations in Unstructured and Structured Data

    The Web of Data has grown explosively over the past few years, and as with any dataset, there are bound to be invalid statements in the data, as well as gaps. Natural Language Processing (NLP) is gaining interest as a means to fill such gaps by transforming (unstructured) text into structured data. However, there is currently a fundamental mismatch in approaches between Linked Data and NLP, as the latter is often based on statistical methods and the former on explicitly modelled knowledge. Nevertheless, these fields can strengthen each other by joining forces. In this position paper, we argue that using Linked Data to validate the output of an NLP system, and using textual data to validate Linked Open Data (LOD) cloud statements, is a promising research avenue. We illustrate our proposal with a proof of concept on a corpus of historical travel stories.
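The two-way validation idea can be sketched as a set comparison between NLP output and structured data; the gazetteer and corpus below are toy stand-ins for the LOD cloud and the travel-story corpus:

```python
# Sketch of bidirectional validation of locations:
# direction 1: structured data flags suspect NLP extractions;
# direction 2: text flags structured entries with no textual evidence.

def cross_validate(extracted: set, gazetteer: set, text: str):
    """Split NLP output into confirmed and suspect locations, and find
    structured entries never mentioned in the text."""
    confirmed = extracted & gazetteer
    suspect = extracted - gazetteer
    unattested = {g for g in gazetteer if g.lower() not in text.lower()}
    return confirmed, suspect, unattested

text = "We sailed from Amsterdam towards Batavia."
extracted = {"Amsterdam", "Batavia", "sailed"}   # one NLP false positive
gazetteer = {"Amsterdam", "Batavia", "Rotterdam"}
confirmed, suspect, unattested = cross_validate(extracted, gazetteer, text)
```

Real validation would match entity URIs rather than surface strings, but the symmetry of the two checks is the point of the proposal.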

    Knowledge graph enabled curation and exploration of Nuremberg’s city heritage

    An important part of European cultural identity rests on European cities, and in particular on their histories and cultural heritage. Nuremberg, home of important artists such as Albrecht Dürer and Hans Sachs, developed into an epitome of German and European culture as early as the Middle Ages. Throughout history, the city experienced a number of transformations, most drastically its almost complete destruction during World War II. This position paper presents TRANSRAZ, a project that aims to recreate Nuremberg by means of an interactive 3D tool for exploring the city's architecture and culture from the 17th to the 21st century. The goal of this position paper is to discuss the ongoing work of connecting heterogeneous historical data from various sources, previously hidden in archives, to the 3D model using knowledge graphs for a scientifically accurate interactive exploration on the Web.

    Multimodal Search on Iconclass using Vision-Language Pre-Trained Models

    Terminology sources, such as controlled vocabularies, thesauri, and classification systems, play a key role in digitizing cultural heritage. However, Information Retrieval (IR) systems that allow users to query and explore these lexical resources often lack an adequate representation of the semantics behind the user's search, which can be conveyed through multiple expression modalities (e.g., images, keywords, or textual descriptions). This paper presents the implementation of a new search engine for one of the most widely used iconography classification systems, Iconclass. The novelty of this system is the use of a pre-trained vision-language model, namely CLIP, to retrieve and explore Iconclass concepts using visual or textual queries.
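The retrieval step behind such a system can be sketched as cosine-similarity ranking over precomputed embeddings; the vectors and Iconclass notations below are toy values, not actual CLIP output:

```python
# Sketch of multimodal retrieval: text and image queries are embedded into
# the same vector space, so both are answered by the same similarity ranking.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, concept_vecs: dict, k: int = 1):
    """Rank concepts by cosine similarity to the query embedding."""
    scores = {c: cosine(query_vec, v) for c, v in concept_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

concepts = {"25F23(LION)": [0.9, 0.1], "73A52": [0.1, 0.9]}
print(retrieve([0.8, 0.2], concepts))  # the first notation ranks highest
```

Because the same ranking serves embeddings of either modality, supporting image queries costs nothing extra once the concept index is built.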

    Knowledge Extraction for Art History: the Case of Vasari’s The Lives of The Artists (1568)

    Knowledge Extraction (KE) techniques are used to convert unstructured information present in texts into Knowledge Graphs (KGs) which can be queried and explored. Despite their potential for cultural heritage domains such as Art History, these techniques often encounter limitations when applied to domain-specific data. In this paper we present the main challenges that KE faces on art-historical texts, using Giorgio Vasari's The Lives of The Artists as a case study. The paper discusses the following NLP tasks for art-historical texts: entity recognition and linking, coreference resolution, time extraction, motif extraction, and artwork extraction. Several strategies to annotate art-historical data for these tasks and to evaluate NLP models are also proposed.
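A dictionary-lookup baseline for entity recognition and linking, the simplest of the listed tasks, might look like the sketch below; the lexicon and knowledge-base identifiers are illustrative assumptions:

```python
# Sketch of dictionary-based entity recognition and linking: known surface
# forms are matched in the text and mapped to knowledge-base identifiers.

def recognize_and_link(text: str, lexicon: dict) -> list:
    """Find lexicon surface forms in the text and link them to KB IDs."""
    found = [(surface, kb_id) for surface, kb_id in lexicon.items()
             if surface in text]
    return sorted(found)

lexicon = {"Giorgio Vasari": "ex:Vasari", "Florence": "ex:Florence"}
mentions = recognize_and_link("Giorgio Vasari worked in Florence.", lexicon)
```

Exactly the limitations the paper discusses show up immediately here: spelling variants, archaic names, and ambiguous surface forms in art-historical prose all defeat a plain lookup, motivating trained models and domain-specific annotation.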